Noisy Rumor Spreading and Plurality Consensus
Error-correcting codes are efficient methods for handling \emph{noisy}
communication channels in the context of technological networks. However, such
elaborate methods differ a lot from the unsophisticated way biological entities
are supposed to communicate. Yet, it has been recently shown by Feinerman,
Haeupler, and Korman {[}PODC 2014{]} that complex coordination tasks such as
\emph{rumor spreading} and \emph{majority consensus} can plausibly be achieved
in biological systems subject to noisy communication channels, where every
message transferred through a channel remains intact only with some small
probability, without using coding techniques. This result is a
considerable step towards a better understanding of the way biological entities
may cooperate. It has nevertheless been established only in the case of
2-valued \emph{opinions}: rumor spreading aims at broadcasting a single-bit
opinion to all nodes, and majority consensus aims at leading all nodes to adopt
the single-bit opinion that was initially present in the system with (relative)
majority. In this paper, we extend this previous work to $k$-valued opinions,
for any $k \geq 2$.
Our extension requires addressing a series of important issues, some
conceptual, others technical. We had to entirely revisit the notion of noise
in order to handle channels carrying $k$-\emph{valued} messages. In fact, we
precisely characterize the type of noise patterns for which plurality consensus
is solvable. Also, a key result employed in the bivalued case by Feinerman et
al. is an estimate of the probability of observing the most frequent opinion
from observing the mode of a small sample. We generalize this result to the
multivalued case by providing a new analytical proof for the bivalued case that
is amenable to being extended, by induction, and that is of independent interest.
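The sample-mode estimate at the heart of the argument is easy to explore numerically. The sketch below is an illustrative Python simulation (not the paper's proof technique; all function names are ours): it measures how often the mode of a small uniform sample agrees with the population's plurality opinion.

```python
import random
from collections import Counter

def mode_of_sample(opinions, sample_size, rng):
    # Mode of a uniform sample drawn with replacement (ties broken arbitrarily).
    sample = rng.choices(opinions, k=sample_size)
    return Counter(sample).most_common(1)[0][0]

def plurality_hit_rate(opinions, sample_size, trials=10_000, seed=0):
    # Empirical probability that the sample mode equals the true plurality opinion.
    rng = random.Random(seed)
    true_plurality = Counter(opinions).most_common(1)[0][0]
    hits = sum(
        mode_of_sample(opinions, sample_size, rng) == true_plurality
        for _ in range(trials)
    )
    return hits / trials

# Population of 1000 agents holding 3-valued opinions; opinion 0 has a relative majority.
population = [0] * 400 + [1] * 350 + [2] * 250
rate = plurality_hit_rate(population, sample_size=25)
```

Increasing the sample size sharpens the estimate; quantifying exactly how fast is the content of the generalized lemma.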
KADABRA is an ADaptive Algorithm for Betweenness via Random Approximation
We present KADABRA, a new algorithm to approximate betweenness centrality in
directed and undirected graphs, which significantly outperforms all previous
approaches on real-world complex networks. The efficiency of the new algorithm
relies on two new theoretical contributions, of independent interest. The first
contribution focuses on sampling shortest paths, a subroutine used by most
algorithms that approximate betweenness centrality. We show that, on realistic
random graph models, we can perform this task in time sublinear in the size of
the graph, with high probability, obtaining a significant speedup with respect
to the worst-case performance. We experimentally show that this new
technique achieves similar speedups on real-world complex networks, as well.
The second contribution is a new rigorous application of the adaptive sampling
technique. This approach decreases the total number of shortest paths that need
to be sampled to compute all betweenness centralities with a given absolute
error, and it also handles more general problems, such as computing the $k$
most central nodes. Furthermore, our analysis is general, and it might be
extended to other settings.
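To make the path-sampling subroutine concrete, here is a deliberately simple Python estimator: it samples random node pairs, picks one shortest path per pair via BFS, and credits the path's interior vertices. This is only the baseline estimator the abstract alludes to, with none of KADABRA's actual machinery (no balanced bidirectional BFS, no adaptive stopping rule); all names are ours.

```python
import random
from collections import deque

def sample_shortest_path(adj, s, t, rng):
    # BFS from s recording all shortest-path predecessors, then walk back
    # from t choosing a random predecessor at each step.
    dist, prev = {s: 0}, {s: []}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            break
        for v in adj[u]:
            if v not in dist:
                dist[v], prev[v] = dist[u] + 1, [u]
                queue.append(v)
            elif dist[v] == dist[u] + 1:
                prev[v].append(u)
    if t not in dist:
        return None  # t unreachable from s
    path = [t]
    while path[-1] != s:
        path.append(rng.choice(prev[path[-1]]))
    return path[::-1]

def approx_betweenness(adj, samples=2000, seed=0):
    # Fraction of sampled shortest paths on which each vertex lies
    # as an interior vertex.
    rng = random.Random(seed)
    nodes = list(adj)
    score = {v: 0.0 for v in nodes}
    for _ in range(samples):
        s, t = rng.sample(nodes, 2)
        path = sample_shortest_path(adj, s, t, rng)
        if path:
            for v in path[1:-1]:
                score[v] += 1.0 / samples
    return score

# Star graph: the hub lies on every shortest path between two leaves.
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
scores = approx_betweenness(star)
```

Each BFS is the expensive step here; the paper's first contribution is precisely about performing this sampling faster than a full BFS on realistic graphs.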
Pooling or sampling: Collective dynamics for electrical flow estimation
The computation of electrical flows is a crucial primitive for many recently proposed optimization algorithms on weighted networks. While typically implemented as a centralized subroutine, the ability to perform this task in a fully decentralized way is implicit in a number of biological systems. Thus, a natural question is whether this task can provably be accomplished in an efficient way by a network of agents executing a simple protocol. We provide a positive answer, proposing two distributed approaches to electrical flow computation on a weighted network: a deterministic process mimicking Jacobi's iterative method for solving linear systems, and a randomized token diffusion process, based on revisiting a classical random walk process on a graph with an absorbing node. We show that both processes converge to a solution of Kirchhoff's node potential equations, derive bounds on their convergence rates in terms of the weights of the network, and analyze their time and message complexity.
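As a concrete (centralized) rendition of the first process, the following Python sketch runs a Jacobi-style iteration for Kirchhoff's node potential equations on a small weighted graph, with the sink grounded at potential 0. It illustrates the underlying linear-algebraic idea, not the distributed protocol itself; the example graph and all names are ours.

```python
def jacobi_potentials(conductances, source, sink, iters=2000):
    # conductances: node -> {neighbor: edge conductance} (symmetric).
    # Solves the node potential equations for a unit current from source to
    # sink, fixing x[sink] = 0, via the Jacobi update
    #   x[u] <- (b[u] + sum_v w_uv * x[v]) / (weighted degree of u).
    x = {u: 0.0 for u in conductances}
    b = {u: 0.0 for u in conductances}
    b[source] = 1.0  # unit current injected at the source
    for _ in range(iters):
        new_x = {}
        for u, nbrs in conductances.items():
            if u == sink:
                new_x[u] = 0.0  # grounded node (plays the role of the absorbing node)
            else:
                new_x[u] = (b[u] + sum(w * x[v] for v, w in nbrs.items())) / sum(nbrs.values())
        x = new_x
    return x

# Path a - b - c with unit conductances: potentials 2, 1, 0 for a unit a -> c current.
w = {"a": {"b": 1.0}, "b": {"a": 1.0, "c": 1.0}, "c": {"b": 1.0}}
potentials = jacobi_potentials(w, source="a", sink="c")
```

The source-to-sink potential difference then equals the effective resistance (here 2), and the flow on each edge is its conductance times the potential difference across it.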
Minimizing Message Size in Stochastic Communication Patterns: Fast Self-Stabilizing Protocols with 3 bits
This paper considers the basic \emph{pull} model of communication, in
which in each round, each agent extracts information from a few randomly chosen
agents. We seek to identify the smallest amount of information revealed in each
interaction (message size) that nevertheless allows for efficient and robust
computations of fundamental information dissemination tasks. We focus on the
Majority Bit Dissemination problem that considers a population of $n$ agents,
with a designated subset of source agents. Each source agent holds an input bit
and each agent holds an output bit. The goal is to have all agents converge
their output bits to the most frequent input bit of the sources (the majority
bit). Note that the particular case of a single source agent corresponds to the
classical problem of Broadcast. We concentrate on the severe fault-tolerant
context of self-stabilization, in which a correct configuration must be reached
eventually, despite all agents starting the execution with arbitrary initial
states.
We first design a general compiler which can essentially transform any
self-stabilizing algorithm with a certain property that uses $\ell$-bit
messages into one that uses only $O(\log \ell)$-bit messages, while paying only a
small penalty in the running time. By applying this compiler recursively we
then obtain a self-stabilizing Clock Synchronization protocol, in which agents
synchronize their clocks modulo some given integer $T$ within a polylogarithmic
number of rounds w.h.p., using messages that contain 3 bits only.
We then employ the new Clock Synchronization tool to obtain a
self-stabilizing Majority Bit Dissemination protocol which converges in
polylogarithmic time, w.h.p., on every initial configuration, provided that the
ratio of sources supporting the minority opinion is bounded away from half.
Moreover, this protocol also uses only 3 bits per interaction.
Bejeweled, Candy Crush and other Match-Three Games are (NP-)Hard
The twenty-first century has seen the rise of a new type of video games targeted
at a mass audience of "casual" gamers. Many of these games require the player
to swap items in order to form matches of three and are collectively known as
\emph{tile-matching match-three games}. Among these, the most influential one
is arguably \emph{Bejeweled} in which the matched items (gems) pop and the
above gems fall in their place. Bejeweled has been ported to many different
platforms and influenced an incredible number of similar games. Very recently
one of them, named \emph{Candy Crush Saga} enjoyed a huge popularity and
quickly went viral on social networks. We generalize this kind of game by
parameterizing only the size of the board, while all the other elements (such
as the rules or the number of gems) remain unchanged. Then, we prove that
answering many natural questions regarding such games is actually NP-hard.
These questions include determining whether the player can reach a certain
score or play for a certain number of turns, among others. We also
\href{http://candycrush.isnphard.com}{provide} a playable web-based
implementation of our reduction.
Large Peg-Army Maneuvers
Despite its long history, the classical game of peg solitaire continues to
attract the attention of the scientific community. In this paper, we consider
two problems with an algorithmic flavour which are related to this game,
namely Solitaire-Reachability and Solitaire-Army. In the first one, we show
that deciding whether there is a sequence of jumps which allows a given initial
configuration of pegs to reach a target position is NP-complete. Regarding
Solitaire-Army, the aim is to successfully deploy an army of pegs in a given
region of the board in order to reach a target position. By solving an
auxiliary problem with relaxed constraints, we are able to answer some open
questions raised by Cs\'ak\'any and Juh\'asz (Mathematics Magazine, 2000). To
appreciate the combinatorial beauty of our solutions, we recommend visiting the
gallery of animations provided at http://solitairearmy.isnphard.com.
Stabilizing Consensus with Many Opinions
We consider the following distributed consensus problem: Each node in a
complete communication network of size $n$ initially holds an \emph{opinion},
which is chosen arbitrarily from a finite set $\Sigma$. The system must
converge toward a consensus state in which all, or almost all nodes, hold the
same opinion. Moreover, this opinion should be \emph{valid}, i.e., it should be
one among those initially present in the system. This condition should be met
even in the presence of an adaptive, malicious adversary who can modify the
opinions of a bounded number of nodes in every round.
We consider the \emph{3-majority dynamics}: At every round, every node pulls
the opinion from three random neighbors and sets its new opinion to the
majority one (ties are broken arbitrarily). Let $k$ be the number of valid
opinions. We show that, if $k \leq n^{\alpha}$, where $\alpha$ is a
suitable positive constant, the 3-majority dynamics converges in time
polynomial in $k$ and $\log n$ with high probability even in the presence of an
adversary who can affect up to $o(\sqrt{n})$ nodes at each round.
Previously, the convergence of the 3-majority protocol was known for $k = 2$
only, with an argument that is robust to adversarial errors. On the other
hand, no anonymous, uniform-gossip protocol that is robust to adversarial
errors was known for $k \geq 3$.
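The 3-majority dynamics itself is straightforward to simulate. The following Python sketch (illustrative only, with no adversary; names are ours) runs the dynamics on a complete graph until every node holds the same, necessarily valid, opinion.

```python
import random

def three_majority_round(opinions, rng):
    # Each node samples 3 nodes uniformly at random (with replacement) and
    # adopts the majority opinion among the samples; if all three differ,
    # the tie is broken in favor of the first sample.
    n = len(opinions)
    new = []
    for _ in range(n):
        a, b, c = (opinions[rng.randrange(n)] for _ in range(3))
        new.append(a if (a == b or a == c) else (b if b == c else a))
    return new

def rounds_to_consensus(n=1000, k=5, seed=0, max_rounds=10_000):
    # Start from k roughly balanced valid opinions; return (rounds, winning opinion).
    rng = random.Random(seed)
    opinions = [i % k for i in range(n)]
    for r in range(1, max_rounds + 1):
        opinions = three_majority_round(opinions, rng)
        if len(set(opinions)) == 1:
            return r, opinions[0]
    return None, None
```

Note that the winning opinion is always one that was initially present (validity): a node can only ever adopt an opinion it has sampled.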
Self-Stabilizing Repeated Balls-into-Bins
We study the following synchronous process that we call "repeated
balls-into-bins". The process is started by assigning $n$ balls to $n$ bins in
an arbitrary way. In every subsequent round, from each non-empty bin one ball
is chosen according to some fixed strategy (random, FIFO, etc), and re-assigned
to one of the bins uniformly at random.
We define a configuration "legitimate" if its maximum load is
$\mathcal{O}(\log n)$. We prove that, starting from any configuration, the
process will converge to a legitimate configuration in linear time and then it
will only take on legitimate configurations over a period of length bounded by
any polynomial in $n$, with high probability (w.h.p.). This implies that the
process is self-stabilizing and that every ball traverses all bins in
$O(n \log^2 n)$ rounds, w.h.p.
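The process takes only a few lines of Python to simulate. The sketch below (ours, for illustration) starts from an extreme initial configuration, all $n$ balls in one bin, and reports the maximum load after a given number of rounds; since the balls are indistinguishable here, the fixed strategy for choosing which ball leaves a bin is irrelevant.

```python
import random

def repeated_balls_into_bins(n, rounds, seed=0):
    # Start with all n balls in bin 0 (an arbitrary, highly unbalanced configuration).
    rng = random.Random(seed)
    load = [0] * n
    load[0] = n
    for _ in range(rounds):
        # Each non-empty bin releases exactly one ball...
        moving = sum(1 for l in load if l > 0)
        for i in range(n):
            if load[i] > 0:
                load[i] -= 1
        # ...and every released ball lands in a uniformly random bin.
        for _ in range(moving):
            load[rng.randrange(n)] += 1
    return max(load)
```

After linearly many rounds the maximum load settles around the logarithmic legitimate threshold and stays there, which is the self-stabilization phenomenon the abstract proves.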